On Solving Nonconvex Optimization Problems by Reducing The Duality Gap
Author

Abstract
Lagrangian bounds, i.e. bounds computed by Lagrangian relaxation, have been used successfully in branch-and-bound methods for solving certain classes of nonconvex optimization problems by reducing the duality gap. We discuss this method for the class of partly linear and partly convex optimization problems and, incidentally, point out incorrect results in the recent literature on this subject.
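A minimal sketch of the mechanism on a hypothetical toy instance (the instance, names, and grids below are illustrative assumptions, not taken from the paper): minimize the nonconvex bilinear objective f(x) = -x1*x2 over the box [0,1]^2 subject to x1 + x2 <= 1. The Lagrangian bound relaxes the coupling constraint with a multiplier, and branching on the box tightens the bound, which is exactly how branch and bound shrinks the duality gap.

    import itertools

    # Hypothetical toy instance, not from the paper.
    def lagrangian_bound(box, lam):
        # L(x, lam) = -x1*x2 + lam*(x1 + x2 - 1) is bilinear, so its minimum
        # over a box is attained at one of the four box vertices.
        return min(-x1 * x2 + lam * (x1 + x2 - 1)
                   for x1, x2 in itertools.product(*box))

    def best_bound(box, lams):
        # Weak duality: max over lam >= 0 of the bound stays below the optimum.
        return max(lagrangian_bound(box, lam) for lam in lams)

    def primal_opt(box, n=400):
        # Brute-force the primal optimum on a grid (for comparison only).
        (a1, b1), (a2, b2) = box
        g1 = [a1 + (b1 - a1) * i / n for i in range(n + 1)]
        g2 = [a2 + (b2 - a2) * i / n for i in range(n + 1)]
        return min(-x1 * x2 for x1 in g1 for x2 in g2 if x1 + x2 <= 1)

    lams = [i / 100 for i in range(201)]        # multipliers in [0, 2]
    root = ((0.0, 1.0), (0.0, 1.0))             # the box [0,1]^2
    print("gap at root:", primal_opt(root) - best_bound(root, lams))
    # Branch on x1; the overall lower bound is the smallest child bound.
    kids = (((0.0, 0.5), (0.0, 1.0)), ((0.5, 1.0), (0.0, 1.0)))
    child = min(best_bound(b, lams) for b in kids)
    print("gap after one branch:", primal_opt(root) - child)

On this toy instance the gap drops from 0.25 at the root to roughly 0.09 after a single branching step.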
Similar articles
An Efficient Neurodynamic Scheme for Solving a Class of Nonconvex Nonlinear Optimization Problems
Under the p-power (or partial p-power) transformation, the Lagrangian function of a nonconvex optimization problem becomes locally convex. In this paper, we present a neural network based on an NCP function for solving the nonconvex optimization problem. An important feature of this neural network is the one-to-one correspondence between its equilibria and KKT points of the nonconvex optimization ...
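The snippet does not say which NCP function the network uses; the Fischer-Burmeister function below is one standard choice, shown purely to illustrate how an NCP function encodes KKT complementarity (an assumption, not necessarily the authors' choice).

    import math

    def fischer_burmeister(a, b):
        # A standard NCP function: phi(a, b) = 0 iff a >= 0, b >= 0, a*b = 0.
        # (One common choice; the paper may use a different NCP function.)
        return math.sqrt(a * a + b * b) - a - b

    # KKT complementarity  lam >= 0, g(x) <= 0, lam * g(x) = 0  becomes the
    # single equation  phi(lam, -g(x)) = 0.
    print(fischer_burmeister(0.0, 3.0))   # 0.0: complementarity holds
    print(fischer_burmeister(2.0, 1.0))   # nonzero: 2 * 1 != 0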
Benson's algorithm for nonconvex multiobjective problems via nonsmooth Wolfe duality
In this paper, we propose an algorithm to obtain an approximation set of the (weakly) nondominated points of nonsmooth multiobjective optimization problems with equality and inequality constraints. We use an extension of the Wolfe duality to construct the separating hyperplane in Benson's outer algorithm for multiobjective programming problems with subdifferentiable functions. We also fo...
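For reference, the classical Wolfe dual of min f(x) subject to g(x) <= 0 with differentiable data reads as follows; this is the textbook smooth version, not the authors' exact nonsmooth formulation, which replaces gradients by subdifferentials:

    \max_{y,\,u}\; f(y) + u^{\top} g(y)
    \quad \text{s.t.} \quad \nabla f(y) + \nabla g(y)^{\top} u = 0, \quad u \ge 0.

In the subdifferentiable case the stationarity condition becomes 0 \in \partial f(y) + \sum_i u_i \, \partial g_i(y).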
Solutions and optimality criteria for nonconvex constrained global optimization problems with connections between canonical and Lagrangian duality
This paper presents a canonical duality theory for solving a general nonconvex quadratic minimization problem with nonconvex constraints. By using the canonical dual transformation developed by the first author, the nonconvex primal problem can be converted into a canonical dual problem with zero duality gap. A general analytical solution form is obtained. Both global and local...
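For the terms involved, recall the standard Lagrangian definitions (classical notions, not the canonical construction of the paper): with

    f^{\star} = \min \{ f(x) : x \in X,\; g(x) \le 0 \}, \qquad
    d^{\star} = \sup_{\lambda \ge 0} \; \inf_{x \in X} \big( f(x) + \lambda^{\top} g(x) \big),

weak duality gives d^{\star} \le f^{\star}, and the duality gap is f^{\star} - d^{\star}. "Zero duality gap" means the dual problem is constructed so that its optimal value equals f^{\star} exactly.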
Canonical dual solutions to nonconvex radial basis neural network optimization problem
Radial Basis Function Neural Networks (RBFNNs) are tools widely used in regression problems. One of their principal drawbacks is that training with supervision of both the centers and the weights is a highly non-convex optimization problem, which leads to fundamental difficulties for traditional optimization theory and methods. This paper presents ...
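A minimal Gaussian RBFNN sketch (a generic model under assumed names, not the paper's formulation) that makes the drawback concrete: with the centers held fixed, fitting the weights is a convex linear least-squares problem; supervising the centers as well makes the training objective nonconvex.

    import numpy as np

    def rbf_features(X, centers, sigma=1.0):
        # Phi[i, j] = exp(-||X[i] - centers[j]||^2 / (2 * sigma**2))
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(50, 2))      # toy data, for illustration
    y = np.sin(X[:, 0]) + X[:, 1] ** 2            # toy regression target
    centers = rng.uniform(-1.0, 1.0, size=(8, 2)) # centers held fixed here
    Phi = rbf_features(X, centers)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # convex fit of the weights
    # Optimizing centers (and sigma) jointly with w is the nonconvex part.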
Column Generation based Alternating Direction Methods for solving MINLPs
Traditional decomposition based branch-and-bound algorithms, like branch-and-price, can be very efficient if the duality gap is not too large. However, if this is not the case, the branch-and-bound tree may grow rapidly, preventing the method from finding a good solution. In this paper, we present a new decomposition algorithm, called ADGO (Alternating Direction Global Optimization algorithm), for globally ...
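The snippet does not detail ADGO; for orientation only, the classical alternating direction (ADMM) update for min f(x) + g(z) subject to x = z, written in scaled form, is (a generic sketch, not the algorithm of the paper):

    x^{k+1} = \arg\min_x \Big( f(x) + \tfrac{\rho}{2} \, \| x - z^k + u^k \|^2 \Big),
    z^{k+1} = \arg\min_z \Big( g(z) + \tfrac{\rho}{2} \, \| x^{k+1} - z + u^k \|^2 \Big),
    u^{k+1} = u^k + x^{k+1} - z^{k+1}.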
Journal: J. Global Optimization
Volume: 32, Issue: -
Pages: -
Published: 2005